Dance with Melody: An LSTM-autoencoder Approach to Music-oriented Dance Synthesis
Author
Abstract
Dance is greatly influenced by music. Studies on how to synthesize music-oriented dance choreography can promote research in many fields, such as dance teaching and human behavior research. Although considerable effort has been directed toward investigating the relationship between music and dance, the synthesis of appropriate dance choreography based on music remains an open problem. There are two main challenges: 1) how to choose appropriate dance figures, i.e., groups of steps that are named and specified in technical dance manuals, in accordance with music and 2) how to artistically enhance choreography in accordance with music. To solve these problems, in this paper, we propose a music-oriented dance choreography synthesis method using a long short-term memory (LSTM)-autoencoder model to extract a mapping between acoustic and motion features. Moreover, we improve our model with temporal indexes and a masking method to achieve better performance. Because of the lack of data available for model training, we constructed a music-dance dataset containing choreographies for four types of dance, totaling 907,200 frames of 3D dance motions and accompanying music, and extracted multidimensional features for model training. We employed this dataset to train and optimize the proposed models and conducted several qualitative and quantitative experiments to select the best-fitted model. Finally, our model proved to be effective and efficient in synthesizing valid choreographies that are also capable of musical expression.
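To make the pipeline in the abstract concrete, below is a minimal sketch of an LSTM-autoencoder that maps per-frame acoustic features (augmented with a temporal index) to per-frame motion features, with optional masking of padded frames. This is not the authors' code: the framework (PyTorch), feature dimensions, layer sizes, and the exact form of the temporal index and mask are illustrative assumptions.

```python
# Minimal sketch of the acoustic-to-motion mapping described in the abstract.
# All dimensions and hyperparameters below are assumptions, not from the paper.
import torch
import torch.nn as nn

class DanceLSTMAutoencoder(nn.Module):
    def __init__(self, acoustic_dim=16, motion_dim=63, hidden_dim=128):
        super().__init__()
        # Encoder: acoustic feature sequence (+ temporal index) -> latent sequence
        self.encoder = nn.LSTM(acoustic_dim + 1, hidden_dim, batch_first=True)
        # Decoder: latent sequence -> hidden motion representation
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Project hidden states to per-frame motion features (e.g., joint coordinates)
        self.out = nn.Linear(hidden_dim, motion_dim)

    def forward(self, acoustic, mask=None):
        # acoustic: (batch, T, acoustic_dim); append a normalized temporal index per frame
        batch, T, _ = acoustic.shape
        t_index = torch.linspace(0, 1, T, device=acoustic.device).expand(batch, T).unsqueeze(-1)
        x = torch.cat([acoustic, t_index], dim=-1)
        latent, _ = self.encoder(x)
        hidden, _ = self.decoder(latent)
        motion = self.out(hidden)
        if mask is not None:
            # mask: (batch, T, 1) -- zero out padded frames so they do not contribute to the loss
            motion = motion * mask
        return motion

# Usage: train with an MSE loss between predicted and ground-truth motion frames.
model = DanceLSTMAutoencoder()
audio = torch.randn(2, 100, 16)    # two clips, 100 frames, 16 acoustic features each
pred_motion = model(audio)         # (2, 100, 63)
```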
Source
Proceedings of the 26th ACM International Conference on Multimedia (MM '18)
Comments
The paper claims this is the largest existing dance video dataset in which the music and the dance motions are linked.
Dance-type abbreviations: C = Cha-cha, T = Tango, R = Rumba, W = Waltz.
The dataset is not user-friendly: it comes with little to no documentation.
URL
Tag